Proximal gradient method

Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form
:
\min_{x \in \mathbb{R}^N} \quad f_1(x) + f_2(x) + \cdots + f_{n-1}(x) + f_n(x)

where f_1, f_2, \ldots, f_n are convex functions defined from \mathbb{R}^N into \mathbb{R}, and some of the functions are non-differentiable. This rules out conventional smooth optimization techniques such as the steepest descent method and the conjugate gradient method. There is, however, a specific class of algorithms that can solve the above optimization problem. These methods proceed by splitting,
in that the functions f_1, \ldots, f_n are used individually so as to yield an easily implementable algorithm.
They are called proximal because each non-smooth function among f_1, \ldots, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and the alternating split Bregman method are special instances of proximal algorithms. Details of proximal methods are discussed in Combettes and Pesquet. For the theory of proximal gradient methods from the perspective of, and with applications to, statistical learning theory, see proximal gradient methods for learning.
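The splitting idea above can be made concrete with a minimal sketch of the proximal gradient iteration for a lasso-type problem, \min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1, where the smooth least-squares term is handled by a gradient step and the \ell_1 term by its proximity operator (soft-thresholding). The data, step size, and iteration count below are illustrative assumptions, not part of the original article.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by the proximal gradient method."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the non-smooth part
    return x

# Illustrative synthetic problem with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
```

Each iteration alternates a gradient step on the differentiable term with the proximity operator of the non-differentiable term, which is exactly the splitting the text describes.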
== Notations and terminology ==
Let \mathbb{R}^N, the N-dimensional Euclidean space, be the domain of the function
f : \mathbb{R}^N \rightarrow (-\infty, +\infty]. Suppose C is a non-empty
convex subset of \mathbb{R}^N. Then the indicator function of C is defined as
: i_C : x \mapsto
\begin{cases}
0 & \text{if } x \in C \\
+\infty & \text{if } x \notin C
\end{cases}
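The indicator function translates directly into code. The following sketch evaluates i_C for an illustrative choice of C, the box [lo, hi]^N (the set C and the function name are assumptions for the example).

```python
import math

def indicator_box(x, lo=0.0, hi=1.0):
    """Indicator function of the box C = [lo, hi]^N: 0 inside C, +infinity outside."""
    inside = all(lo <= xi <= hi for xi in x)
    return 0.0 if inside else math.inf

# indicator_box([0.2, 0.9]) returns 0.0; indicator_box([1.5, 0.0]) returns inf
```

Minimizing f + i_C is equivalent to minimizing f over C, which is how constraints enter the splitting framework.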

: The p-norm \| \cdot \|_p is defined as
:
\|x\|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_N|^p \right)^{1/p}
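The p-norm formula can be checked numerically; the helper below is a direct transcription of the definition (for p >= 1), verified against the classic 3-4-5 example.

```python
import numpy as np

def p_norm(x, p):
    """p-norm as defined above: (|x_1|^p + ... + |x_N|^p)^(1/p), for p >= 1."""
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

x = np.array([3.0, -4.0])
# p_norm(x, 2) == 5.0 (Euclidean norm), p_norm(x, 1) == 7.0 (sum of absolute values)
```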

The distance from x \in \mathbb{R}^N to C is defined as
:
D_C(x) = \min_{y \in C} \|x - y\|

If C is closed and convex, the projection of x \in \mathbb{R}^N onto C is the unique point
P_C x \in C such that D_C(x) = \| x - P_C x \|_2.
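Projection onto a closed convex set has a closed form for simple sets. As an illustrative assumption, take C to be the Euclidean ball of a given radius: the projection rescales any point outside the ball back onto its boundary, and the distance D_C(x) follows from the definition above.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection P_C x onto the closed ball C = {y : ||y||_2 <= radius}."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def dist_to_ball(x, radius=1.0):
    """D_C(x) = ||x - P_C x||_2 for the same ball."""
    return float(np.linalg.norm(x - project_ball(x, radius)))

x = np.array([3.0, 4.0])                  # ||x||_2 = 5, outside the unit ball
# project_ball(x) == [0.6, 0.8], dist_to_ball(x) == 4.0
```

The proximity operator of the indicator function i_C is exactly this projection, which is why projected gradient methods are special cases of proximal methods.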
The subdifferential of f at x is given by
:
\partial f(x) = \{ u \in \mathbb{R}^N : \forall y \in \mathbb{R}^N,\ (y - x)^{\mathrm{T}} u + f(x) \leq f(y) \}
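The subgradient inequality can be tested numerically in one dimension. The sketch below checks it on a grid of points for f(x) = |x| at x = 0, where the subdifferential is the interval [-1, 1] (the grid and tolerance are assumptions of the example, and a finite grid can only refute, not prove, membership).

```python
import numpy as np

def is_subgradient(u, x, f, points, tol=1e-12):
    """Check the subgradient inequality f(y) >= f(x) + u*(y - x) on a grid of test points."""
    return all(f(y) >= f(x) + u * (y - x) - tol for y in points)

f = abs
grid = np.linspace(-2.0, 2.0, 101)
# Every u in [-1, 1] satisfies the inequality for |x| at x = 0; u outside that interval fails.
```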


Source: the free encyclopedia Wikipedia.
Read the full article "Proximal gradient method" at Wikipedia.